Search Results: "jones"

23 September 2012

Jonas Smedegaard: Spamming?

Hi! I have collected a small list of people to whom I want to give a sign of life once in a while - and you are one of them. Maybe we shook hands once, and for some obscure reason I kept (or found) your email address.
Maybe we are even related by blood (then you probably know :-) - e.g. my brothers are on the list, as are both my parents.
A few of you I have actually never met in person, but you made a special impression on me - either in a phone conversation or an email exchange. If you dislike being "bombarded" with emails from me - or if you are not at all the one I think you are, and I am really writing to a stranger - then please tell me! I will immediately remove you from the list if you ask - and won't take it badly. Kind regards
Jonas Smedegaard
This text is part of my friends scribblings.

22 September 2012

Jonas Smedegaard: Muziik!

Hi there! I'm sittin' here listening to Rickie Lee Jones. Just bought the CD today :-) Simply wanted you to know, that's all! Best regards
Jonas Smedegaard
BTW: Welcome aboard this pseudo-list (actually I'm doin' it directly from my e-mail client!). If you don't know who I am, or why you got this letter, then please contact me - there might be an error somewhere (and we might get to know each other ;-). I'm just an ordinary guy with an ordinary mail program, not a spammer.
This text is part of my friends scribblings.

7 September 2012

Jonas Smedegaard: Asia 2011 - Khammam

My first morning in India I woke up very late in the house of Pavithran's parents. Pavithran was out organizing events for us, and his father was at work, leaving me with his mother. She spoke little English - I am unsure whether it was lack of skill, or humility, or perhaps she just felt as shy as me in the situation.

How to, ahem, do your thing in a restroom when there is no toilet paper but instead a bucket of water? I decided not to yell for help but to use my imagination.

How to eat breakfast local style? I knew from dinner the previous night with Pavithran that local custom was to eat with my hands, and that (lucky for me) it was tolerated in India to use the left hand, so I turned down the kind offer of a spoon. But was I supposed to mix rice with all the curries? Or one at a time? In which order? What if a curry was too hot to dip my fingers into? Was the liquid stuff to drink, to mix with the rice, or to eat afterwards? I asked for help; at first she just smiled at my alien, useless language, and then, when I persisted, she patiently demonstrated with her hands in my food how to do it. The only natural thing to do, really, and I dearly appreciate her help and patience. But wow, it was mind-blowing to me! Since early childhood, fingers in food has been a forbidden thing: "Don't play with the food!". On top of that, having someone else handle my dish while at the table is something I associate with being very old and needing to be spoon-fed.

In the afternoon we went for a small hike to an old stone fortress in the middle of Khammam - with a great view of the sun setting.

Next day we went to Sarada Institute of Technology & Science, where I gave my first talk to about 100 students. I had intended to provide concrete facts on Debian generally and on my pet project, Debian Pure Blends, but the night before I decided to radically shift focus to their situation in their early twenties - as best as I could imagine it. Pavithran had clearly expected a different style of talk, but I liked it and believe it was received well by the audience as well. The teacher responsible for the event, Bhukya Jabber, afterwards asked for hints on running Debian in their computer lab. I suggested not locking down access but instead making it easy to reinstall, and he explained how he was quite interested in a larger degree of learning-by-doing (which I had also promoted in my talk) but was constrained by a curriculum dictated higher up in the educational system.

Late that day Pavithran's father introduced me to CPM Khammam - the local offices and community center of the Communist Party - and to a colleague of his at the place, N. N. Rao. I instantly fell in love with the place and its atmosphere, and now have an open invitation to come back and spend 1-2 months to study and to collaborate with other users of the place (including some kids hosted there) on Free Software.

Next morning we headed back by train to Hyderabad.

I am still amazed how radical it feels sticking my fingers into food. Not the physical feeling (I am not that disconnected from my body), but similar to a discovery I had as a teenager: after 7 years of piano lessons (and numerous other instruments, less patiently) I gave up, because I felt it was too difficult to express personality through the instrument. After that I exclusively sang - as I had always done, but only now did I recognize my voice as a serious musical instrument. Similarly, I now realized that eating with the fingers is not just yet another eating style like fork+knife or chopsticks - it is the natural one. Obviously, in hindsight.

This text is part of my Asia 2011 scribblings.

22 August 2012

Gunnar Wolf: An industry commits suicide and blames us

[ Once again, I am translating somebody else's material. In this case, my good Costa Rican friend Carolina Flores. Please excuse my stylistic mistakes; my English is far from native, as you well know. But this material is worth sharing, and worth investing some tens of minutes doing a quick translation. If you can read Spanish, go read Caro's original entry. ]

Have you been to a music record store lately? I did so last Saturday, as a mere exercise. I was not planning on buying anything, but I wanted to monitor things and confirm my suspicions. What was I suspicious of? First, that I would only find old records. And so it was: the only recent record I found was ...Little Broken Hearts by Norah Jones. Second, that I would only find music for people over 50. And so it was: were I there to look for a present for my father, I would have walked out with 10 good records. Third, that nothing worth commenting on would happen in the store. About that last point, I should point out it was around 10 AM and the store had just opened its doors. Let's concede the benefit of the doubt.

I don't think many of you will remember, but in Barrio La California (where there is now a beauty parlour, almost in front of AM.PM) there was Auco Disco. In Auco Disco there was a guy specialized in rock (Mauricio Alice) and another one specialized in jazz (I don't remember his name). In that store you could always find rare records, but if they were not there, at least you were sure to find somebody to say: "No, we don't have that, but that's an excellent record, it's the best that [insert group here] have ever recorded, because just afterwards they switched their guitar player; they had gone a bit south, but with that record they are flying. But no, we don't have it; I can recommend you this record by [insert another group], because it has a guitar solo in track six that is amazing". It would happen more or less like that, which means one would arrive at Auco Disco at 10 AM and leave around 5 PM with three new records, after having listened to a spectacular music selection.

What happened to those stores? Were they killed by The Pirate Bay? That's the simplistic answer from the recording industry! The answer is that those stores never got anything from the industry but an invoice. The industry - especially in dispensable markets such as ours - limited itself to hiring artists, making sure they recorded a sellable product, producing the object called a record, and that's it. The more commercial radio stations were paid to program those songs - it cannot be by chance that "Mosa, mosa" is the summer hit in all of Latin America, can it? - but record stores? Nothing.

Let's carry on with that idea: radio stations are paid to program said music. This idea should not lead us to believe that recording companies are to blame for bad taste. I won't reveal my sources, but I know the success of the song "Locura automática" by La Secta was a real example: nobody paid for it. That song got to number one because of its own merits(?) (you don't know the effort it took to find that thing; I cannot recommend it to you). The same thing happens with other stations that don't program reggaetón, that try to save the species, and where they play what we do like. But the thing is, none of what we like is available in any record store in this country. So even if we wanted to buy a record, or give one as a gift to somebody, it is plain impossible.
And don't tell me it's the same to give as a gift a link, or a CD full of downloaded MP3s, as it is to give a record with cover and booklet, wrapped in gift paper. I might be old-school, but the fetish object record still exists, not only because of its cover, but because of its sound. A 3MB MP3 is akin to drinking coffee dripping from a bag that has been used eight times with the same coffee beans. That format is the worst thing that has ever happened to music, and if we had any bit of dignity we would never purchase digital files on Amazon or iTunes save for MP3s with an acceptable compression level. That is, if we could buy them at all, because not even that is allowed to us. As the music industry has no interest in solving ITS problem (it is not our problem, it is those companies'), it has not even been resolved how to charge for an MP3 download that includes import fees (well, if downloading from here an MP3 from a USA-based file server can be considered importing goods into Costa Rica!!!), so we are left getting dizzy in the nineties-looking Titi Online, only to discover there is nothing by Muse, Andrew Bird, The Killers, Death Cab for Cutie, Paramore, Björk... (believe me, I looked them all up, even Norah Jones and La Secta. They were not there either).

This all leads me to the question, which I present with all due respect (NOT): What the fuck do they expect us to do??? It is outrageous; above all because in the best case they will sell us a watery-coffee download that won't allow us to get all the details a vinyl, or less compression, would give us. In the worst case, post-MP3 groups will end up recording music with no harmonics or hidden sounds, because - what for? Nobody will hear it. They even admit it: "Some musicians and audio engineers say the MP3 format is changing the way studios mix their recordings. They say the MP3 format 'flattens' dynamics - differences in tone and volume - in a song. As a result, a great deal of new music sounds very much alike, and there is little focus on creating a dynamic listening experience. Why work so hard on creating a complex sound if nobody can detect it?" (Rolling Stone, The Death of High Fidelity, December 26, 2007, taken from here).

That's why I am not surprised by Adrián's post regarding the sales of old records. The price has nothing to do with it. The causes are related to the fetish object record, and what it means or does not mean for people that have never purchased one. Adrián also asks if anybody here keeps buying records. I answered that I would, if the stores sold anything I like. I do it even after the nausea I feel while reading "This phonogram is an intellectual work protected in favor of its producer. COPYING IT IN WHOLE OR IN PART IS FORBIDDEN" (like that, uppercased, yelling at whoever is only guilty of having bought a record, and defending the producer, not the artist). But I am sure that almost nobody buys records because doing so is no longer a gratifying experience; because if buying a record means clicking and waiting 15 days for it to reach the mailbox, we prefer clicking on the download link.

But there is another reason for people not to buy records anymore. In one of my talks regarding the dictatorship of "all rights reserved", I asked the 30 twenty-something-year-old students whether any of them had ever bought a record. One answered he had, because he is an author and performer ("cantautor" in Spanish) and understands the effort that producing a record entails. The rest of them had never done so.
Is it possible that said young people have never listened to real music? Is it possible that, were it not for concerts, what they consider music is a set of washed-out MP3s that are about to fill up 1TB of their computer? Do people no longer buy records because they cannot differentiate one sound from another?

It is not very clear to me where I want to get with this. The recording industry is despicable. An industry that, instead of innovating, spends its energy on suing adolescents for downloading songs, trying to pass laws restricting our freedoms on the Internet, putting up DRM that makes us hostages to our own devices*, and forcing us to listen to just the aroma of music, deserves my whole contempt. If we add to this that said industry won't allow us to legally download their breadcrumbs because it has not understood that the Internet does not need a van crossing borders, besides my contempt they deserve my pity and my heartfelt condolences. But the condolences are also for music - real music, that which is not compressed under the shoe using a terrible format. They are also for independent musicians that have not realized that by begging that industry for a bit of space they just attach the "despicable" tag to themselves, given that they deserve the fruits of their work to enter their own bank account.

However, there are good things that have come out of this absurdity. Good for those that have joined projects such as Autómata (even if it is in MP3), and for dreams come true such as Musopen (which has made music that is in theory in the Public Domain become so in practice as well). Good for the Electronic Frontier Foundation and the list of lawyers willing to defend people accused of illegally downloading music in the USA. Good for the Creative Commons licenses that allow free sharing. All those are growing solutions, although none of them allows me to buy the Panamanian Carlos Méndez's record. Thankfully, a friend of mine who knows I will never give a dime to Apple bought the files for me on iTunes. I thank him deeply, although I would have preferred to go to Auco Disco and have Mauricio tell me that the 2007 EP I have from Carlos is better than the record he did in 2009.

* My devices don't have DRM because I use free software. I also use the Ogg file format.

Image by verbeeldingskr8

16 August 2012

Matthew Garrett: Building personal trust with UEFI Secure Boot

The biggest objection that most people have to UEFI Secure Boot as embodied in the Windows 8 certification requirements is the position of Microsoft as the root of trust. But we can go further than that - putting any third party in a position of trust means that there's the risk that your machine will end up trusting code that you don't want it to trust. Can we avoid that?

It turns out that the answer is yes, although perhaps a little more complicated than ideal. The Windows 8 certification requirements insist that (on x86, at least) the key databases be completely modifiable. That means you can delete all the keys provided by your manufacturer, including Microsoft's. However, as far as the spec is concerned, a system without keys is in what's called "Setup Mode", and in this state it'll boot anything - even if it doesn't have a signature.

So, we need to populate the key database. This isn't terribly difficult. James Bottomley has a suite of tools here, including support for generating keys and building an EFI binary that will enrol them into the key databases. So, now you have a bunch of keys and the public half of them is in your platform firmware. What next?
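
As a rough illustration, generating and packaging a platform key with that suite looks something like this (the tool names and flags below reflect my understanding of the efitools package and may differ between versions):

# Generate a self-signed certificate to act as the Platform Key
$ openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
      -subj "/CN=my platform key/" -keyout PK.key -out PK.crt
# Wrap the certificate in an EFI signature list, tagged with a GUID
$ cert-to-efi-sig-list -g $(uuidgen) PK.crt PK.esl
# Sign the update with its own key so the firmware will accept it as PK
$ sign-efi-sig-list -k PK.key -c PK.crt PK PK.esl PK.auth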

If you've got a system with plug-in graphics hardware, what happens next is that your system no longer has any graphics. The firmware-level drivers for any plug-in hardware also need to be signed, and won't run otherwise. That means no graphics in the firmware. If you're netbooting off a plug-in network card, or booting off a plug-in storage controller, you're going to have similar problems. This is the most awkward part of the entire process. The drivers are all signed with a Microsoft key, so you can't just enrol that without trusting everything else Microsoft have signed. There is a way around this, though. The typical way to use Secure Boot is to provide a list of trusted keys, but you can also provide trusted hashes. Doing this involves reading the device ROM, generating a SHA256 hash of it and then putting that hash in the key database.
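
To make that a little more concrete, here's a rough sketch of dumping a card's option ROM through sysfs and computing the hash (the PCI address is a placeholder for your actual device, and how the hash then gets enrolled depends on your key-management tooling):

# Ask the kernel to expose the option ROM of the (hypothetical) card
$ echo 1 > /sys/bus/pci/devices/0000:01:00.0/rom
$ cat /sys/bus/pci/devices/0000:01:00.0/rom > card.rom
$ echo 0 > /sys/bus/pci/devices/0000:01:00.0/rom
# This is the SHA256 hash that would go into the key database
$ sha256sum card.rom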

This is, obviously, not very practical to do by hand. We intend to provide support for this in Fedora by providing a tool that gets run while the system is in setup mode, reads the ROMs, hashes them and enrols the hashes. We'll probably integrate that into the general key installation tool to make it a one-step procedure. The only remaining problem is that swapping your hardware or updating the firmware will take it back to a broken state, and sadly there's no good answer for that at the moment.

Once you've got a full set of enrolled keys and the hashes of any option ROMs, the only thing to do is to sign your bootloader and kernel. Peter Jones has written a tool to do that, and it's available here. It uses nss and so has support for using any signing hardware that nss supports, which means you can put your private key on a smartcard and sign things using that.
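
Assuming the tool in question is pesign (the certificate nickname below is a placeholder, and the exact flags may differ between versions), signing ends up looking roughly like this:

# Sign a bootloader using a certificate held in an nss database
$ pesign --in grubx64.efi --out grubx64.signed.efi \
         --certificate "my signing cert" --sign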

At that point, assuming your machine implements the spec properly, everything that it boots will be code that you've explicitly trusted. But how do you know that your firmware is following the spec and hasn't been backdoored? That's tricky. There's a complete open implementation of UEFI here, but it doesn't include the platform setup code that you'd need to run it on any x86 hardware. In theory it'd be possible to run it on top of Coreboot, but right now that's not implemented. There is a complete port to the Beagleboard hardware, and there's instructions for using it here. The remaining issue is that you need a mechanism for securing access to the system flash, since if you can make arbitrary writes to it an attacker can just modify the key store. On x86 this is managed by the flash controller being switched into a mode where all writes have to be carried out from System Management Mode, and the code that runs there verifies that writes are correctly authenticated. I've no idea how you'd implement that on ARM.

There's still some work to be done in order to permit users to verify the entire stack, but Secure Boot does make it possible for the user to have much greater control over what their system runs. The freedom to make decisions about not only what your computer will run but also what it won't is an important one, and we're doing what we can to make sure that users have that freedom.


17 July 2012

Sylvain Le Gall: Coding by example, migrating ocsoap to OASIS and ocamlbuild

In my effort to automate most of the stuff needed for a release, I am working on a SOAP to OCaml converter. Richard W.M. Jones has kindly allowed me to take over his project "ocsoap". The project is now located here and I am working on making it compatible with FusionForge SOAP. My first step was to make it compatible with OASIS and ocamlbuild. The project is Makefile based and needs some extra care to become ocamlbuild based. The oasis-fication itself was a piece of cake: just describe the dependencies of the project and make some extra choices, like compiling the camlp5 extension to a .cma rather than a .cmo. You can see the _oasis and myocamlbuild.ml results, compared to the initial Makefile. The _oasis is simple:
OASISFormat:  0.3
Name:         ocsoap
Version:      0.7.1
Synopsis:     SOAP converter to OCaml code
Authors:      Richard W.M. Jones, Sylvain Le Gall
License:      LGPL-2.1 with OCaml linking exception
Plugins:      DevFiles (0.3), META (0.3), 
              StdFiles (0.3)
BuildTools:   ocamlbuild,
              cduce
BuildDepends: dynlink,
              pxp-lex-utf8,
              pxp-engine,
              netclient,
              cduce,
              extlib,
              calendar,
              pcre
Library ocsoap
  Path:       src
  Modules:    OCSoap
Library pa_ocsoapclientstubs
  Path:          src
  Modules:       Pa_ocsoapclientstubs
  BuildTools:    camlp5o
  BuildDepends:  camlp5
  FindlibParent: ocsoap
  FindlibName:   syntax
  CompiledObject: byte
  
Executable wsdl_validate
  Path:       src
  MainIs:     wsdl_validate.ml
  
Executable wsdltointf
  Path:       src
  MainIs:     wsdltointf.ml
Executable adwords_test1
  Path: examples/adwords
  BuildDepends: ocsoap
  MainIs: test1.ml
  Build$: flag(tests)
  Install: false
  BuildTools: camlp5o
  BuildDepends: ocsoap.syntax
Executable adwords_test2 
  Path: examples/adwords
  BuildDepends: ocsoap
  MainIs: test2.ml
  Build$: flag(tests)
  Install: false
  BuildTools: camlp5o
  BuildDepends: ocsoap.syntax
Executable adwords_examples 
  Path: examples/adwords
  BuildDepends: ocsoap
  MainIs: example.ml
  Build$: flag(tests)
  Install: false
  BuildTools: camlp5o
  BuildDepends: ocsoap.syntax
Document "api-ocsoap"
  Title: API reference of OCSoap
  Type: ocamlbuild (0.3)
  BuildTools+: ocamldoc
  XOCamlbuildLibraries: ocsoap
  XOCamlbuildPath:      src/
Most of it was generated using oasis quickstart, which helps you write _oasis. This part was not tricky: it was mostly a translation of what the previous Makefile expressed, in its own language. Now comes the tricky part: translating Makefile rules into ocamlbuild rules. This is harder because the general principles behind Makefile and ocamlbuild are not exactly the same. Let's start with a simple rule: compiling a CDuce .cd file into a .cdo. Here is the Makefile rule:
%.cdo: %.cd
        $(CDUCE) --compile $<
In the oasis-fication we have added an extra complexity, because we moved the sources to src, which requires extra flags. Here is the ocamlbuild rule:
rule "cduce: %.cd -> %.cdo"
  ~prod:"%.cdo"
  ~dep:"%.cd"
  begin
    fun env build ->
      Cmd(S[cduce;
            T(tags_of_pathname (env "%.cd")++"cduce"++"compile");
            A"--compile";
            A"-I"; P(Filename.dirname (env "%.cd"));
            P (env "%.cd")])
  end
;;
It is a little bit more complex; let's explain it. The function rule creates a rule with a name and a function that executes it. The labels ~prod and ~dep are the right and left parts of %.cdo: %.cd in the Makefile. When we call env "%.cd", it replaces the % with the matching part of ~prod and ~dep. Next we return a Rule.action. The action in this case is a command Cmd. This rule uses a sequence S, which is the command line itself. The sequence is made of smaller pieces that can be a placeholder for tag content (T + what will trigger it), an atom (A, typically a command line option), or a filename (P). For a file src/foo.ml, the generated command will be
cduce $(TAG) --compile -I src src/foo.ml
where $(TAG) will be replaced by the content of flag ["file:src/foo.ml"; "cduce"; "compile"], but also by the content of flag ["cduce"; "compile"]: the tags just need to be a subset. For example, if you want to add --verbose to the command line, just add flag ["cduce"; "compile"] (S[A"--verbose"]);; somewhere in myocamlbuild.ml. Next rule: we need to compile %.cdo to %.cmo. It is trickier because in this case we want the %.cmi to eventually be built first. It is not required to build it first if there is no %.mli file. Here is the code to do that:
let cduce_mkstubs includes =
  Quote (S([cduce;
            S(List.map
                (fun fn -> S[A"-I"; P fn])
                includes);
            A"--mlstub"]));;
(* cduce < 0.3.9:  cdo2ml -static *) 
let cduce_compile_args env =
  [
    A"-c"; A"-package"; A"cduce";
    A"-pp"; cduce_mkstubs [Filename.dirname (env "%.cmi")];
    A"-I"; P(Filename.dirname (env "%.cmi"));
    A"-impl"; P(env "%.cdo")
  ]
;;
let cduce_prepare_compile env build =
  List.iter
    (function
       | Outcome.Bad _ ->
           (* A failure here just means that the .cmi will be generated
            * during compilation.
            *)
           ()
       | Outcome.Good _ ->
           ())
    (build [[env "%.cmi"]])
;;
rule "cduce: %.cdo -> %.cmo"
  ~prod:"%.cmo"
  ~dep:"%.cdo"
  begin
    fun env build ->
      cduce_prepare_compile env build;
      Cmd(S(
        [ocamlfind; ocamlc;
         T(tags_of_pathname (env "%.cdo")
           ++"cduce"++"ocamlc"++"compile"++"byte")]
        @ (cduce_compile_args env)))
  end
;;
The difference from the former rule is that we call cduce_prepare_compile, which in turn calls (build [[env "%.cmi"]]). The call to build asks ocamlbuild to compile src/foo.cmi, but we don't care about the result, i.e. in case of Outcome.Bad exn we don't fail. This way we don't stop the build process, which will continue and produce a .cmo and .cmi just out of the .cdo. The .cdo itself is compiled as a .ml file with ocamlfind ocamlc, except that we apply cduce --mlstub as a preprocessor: A"-pp"; Quote(S[cduce; ..; A"--mlstub"]). The last rule that I will comment on is the one that transforms a .intf into a .ml. This rule is particular because it is totally different from the corresponding pieces of the original Makefile. Here are the rules that were needed to handle .intf:
examples/adwords/%Service.cmx: examples/adwords/%Service.intf                                                       
   ocamlfind ocamlopt $(OCAMLOPTFLAGS) -c \                                                                          
     -pp "camlp5o ./pa_ocsoapclientstubs.cmo -impl" -c -impl $<
.depend: $(wildcard *.mli) $(wildcard *.ml) \
  $(wildcard examples/adwords/*.mli) $(wildcard examples/adwords/*.ml)
  $(OCAMLDEP) $^ > .depend
  for f in examples/adwords/*.intf; do \
    $(OCAMLDEP) \
    -pp "camlp5o ./pa_ocsoapclientstubs.cmo pr_o.cmo -impl" $$f \
    >> .depend; \
  done
Here is the ocamlbuild rule:
rule "ocsoap: %.intf -> %.ml"
  ~prod:"%.ml"
  ~deps:(if !ocsoap_dev then
           ["%.intf"; !pa_ocsoapclientstubs]
         else
           ["%.intf"])
  begin
    fun env build ->
      Cmd(S[Px !camlp5o; P !pa_ocsoapclientstubs; P "pr_o.cmo"; A"-impl";
            P(env "%.intf"); A"-o"; P(env "%.ml")])
  end
;;
Here we decided to translate the .intf directly into a .ml file. The good thing about ocamlbuild is that it has a powerful dynamic dependency scheme. So here you generate the .ml file, which in turn will generate a .ml.depends and then be compiled the standard way. In the Makefile, you need to compute the .depend file in a separate process that does everything before the compilation even starts (in fact, before the inclusion of the .depend). We also use the trick of running an OCaml printer (pr_o.cmo) with camlp5o, which directly outputs a standard .ml file. Don't hesitate to post a comment if you have questions about OASIS and ocamlbuild. Enjoy.

2 April 2012

Julian Andres Klode: Currying in the presence of mutable parameters

In the language introduced yesterday, mutable parameters play the important role of representing the side effects of actions in the type system and thus ensure referential transparency. One of the questions currently unaddressed is how to handle currying in the presence of mutable parameters. In order to visualise this problem, consider a function
    printLn(mutable IO, String line)
If we bind the first parameter, what should the type of the function be, and - especially important - how do we get back the mutable parameter? Consider the partially applied form printLn1:
    printLn1(String line)
The mutability parameter would be lost and we could not do any further I/O (and the curried form would appear as a pure function, so a good compiler would not even emit a call to it). An answer might be to put the thing into the result when currying:
    printLn2(String line) -> mutable IO
But how can we explain this? In the end, do we perhaps have to use a signature like:
    printLn(String, mutable in IO io1, mutable out IO io2)
We could then introduce syntax to call that as
    printLn(string, mutable io)
where the mutable io argument basically expands to io and io1 for the first call, and for later calls to io1, io2, and so on. It can also be easily curried, by allowing currying to take place on variables not declared as output parameters. We can then curry the function as:
    printLn3(mutable in IO io1, mutable out IO io2)
    printLn4(mutable out IO io2)
If so, we can rename mutable back to unique and make this easier by introducing the unary operator & for those two locations, just like Mercury uses ! for it. We could then write calls looking like this:
    printLn("Hello", &io);
    printLn("Hello", io, io1);
How out parameters are explained is a different topic; we could probably say that an out parameter defines a new variable. Another option is to forbid currying of mutable parameters entirely. This would have the advantage of maintaining the somewhat simple one-parameter style. The programming language Clean does not provide any special syntactic sugar for mutable variables. In Clean, a function gets a unique object and returns a unique object (noted by *). For example, the main entry point of a Clean program (with state) looks like this:
    Start:: *World -> *World
In short, the function Start gets passed an abstract world that is unique, and at the end returns a new unique world. In Clean syntax, our example function would most likely have the signature:
    printLn :: String *IO -> *IO
You now have to maintain one additional variable for each new unique object, which gets a bit complicated with time. On the other hand, you can do function composition on this (if you have a function composition operator that preserves uniqueness, as should be possible in Clean):
    printTwoLines :: *IO -> *IO
    printTwoLines = (printLn "World") o (printLn "Hello")
Function composition on mutable things, however, does not seem to be needed often enough in a functional programming language with a C syntax. People might also ask why monads are not used instead. Simon L. Peyton Jones and Philip Wadler described monadic I/O in their 1993 paper Imperative Functional Programming (http://research.microsoft.com/en-us/um/people/simonpj/papers/imperative.ps.Z), and it is used in Haskell (the paper was about the implementation in Haskell anyway), one of the world's most popular and successful functional programming languages. While monadic I/O works for the Haskell crowd, and surely some other people, the use of monads also limits the expressiveness of code, at least as far as I can tell. As soon as you want to combine multiple monads, you have to start lifting stuff from one monad to another (liftIO and friends), or perform all operations in the IO monad, which prevents obvious optimizations (such as parallelizing the creation of two arrays); in short, dependencies between actions are stricter than they have to be. For a functional language targeting imperative programmers, the lifting part seems a bit too complicated. One of the big advantages of monads is that they are much easier to implement, as they do not require extensions to the type system or to the Hindley-Milner type inference algorithm used by the cool functional programming languages. If you want uniqueness typing, however, you need to modify the algorithm, or infer the basic types first and then infer uniqueness in a second pass (as Clean seems to do).

11 December 2011

Stefano Zacchiroli: bits from the DPL for November 2011

Mako's IronBlogger is a great idea. I often find myself postponing blog posts for a very long time, simply out of laziness. IronBlogger provides a nice community incentive to counter (my) laziness and blog more often. As a related challenge, we have to face the fact that different subsets of our communities use different media to stay informed: mailing lists, blog (aggregators), social media, IRC, etc. Disparities in how they stay informed are a pity and can be countered by using multiple media at a time. Although I haven't blogged very often as of late, I have managed to keep the Debian (Developer) community informed of what happens in "DPL land" on a monthly basis, by means of bits from the DPL mails sent to d-d-a. While the target of bits mails perfectly fits d-d-a, there is no reason to exclude a broader public from them. After all, who knows, maybe we'll find the next DPL victim^W candidate among Planet readers! Bonus point: blogging this also helped me realize that my mails are not as markdown-clean as I thought they were. I still have no IronBlogger squad, though. (And sharing beers with folks in the Boston area is not terribly handy for me.) Anyone interested in setting up a BloggeurDeFer in the Paris area? (SCNR)
Dear Project Members,
another month has passed, so it's time to bother you again about what has happened in DPL land in November (this time with even less delay than the last one, ah!).

Call for Help: press/publicity team

I'd like to highlight the call for help by the press / publicity teams. They are "hiring" and sent out a call for new members a couple of weeks ago. The work they do is amazing and very important for Debian, as important as maintaining packages or fixing RC bugs during a freeze. It is only by letting the world know what Debian is and what we do that we can keep the Project thriving. And letting the world know is exactly what the publicity and press teams do. If you're into writing, blogging, or simply have a crush on social media, please read the call and "apply"!

Interviews

November has apparently been the "let's interview the DPL" month. I've spent quite some time giving interviews to interested journalists about various topics. For both my embarrassment and transparency on what I've said on behalf of Debian, here are the relevant links:

Assets

Legal advice (work in progress)

Relationships with others

Miscellanea

Thanks for reading thus far,
and happy hacking.
PS as usual, the boring day-to-day activity log is available at master:/srv/leader/news/bits-from-the-DPL.*

22 November 2011

Jonas Smedegaard: Asia 2011

In September I visited Brussels, Belgium. EPFSUG had kindly invited me to give a talk about FreedomBox in the European Parliament, and together we extended the trip to also visit other organizations - both grassroots and more formal ones. I returned home exhausted, but also fuelled with renewed energy and passion, and decided to engage in more such activities. Also, an old promise of mine to visit friends in the Aceh region of Indonesia (whom I met in Taiwan a few years back) rumbled in the back of my head. After a bit of juggling with travel routes and sponsoring options, my course was set for a two-month journey with Debian Pure Blends as the main theme:
  1. England: London
  2. India: Khammam, Hyderabad, Mangaluru, Bengaluru
  3. Vietnam: Ho Chi Minh City, Can Tho
  4. Thailand: Bangkok
  5. Malaysia: Putrajaya
  6. Indonesia: Surabaya, Yogyakarta, Bogor, Banda Aceh, Takengon
  7. England: Cambridge
Debian will be sponsoring the East Asia part, and Debian enthusiasts in India are seeking sponsorship for the India part.

Jonas Smedegaard: Asia 2011 - India

Arrival in a new country is always exciting. This was my first time ever visiting India, and although I had heard bits and pieces, especially about the culture, I was as usual nowhere near "well" prepared. How to fill out the registration forms (surprisingly needed in addition to the visa already gathered ahead of departure) when your only known address in India is on the laptop whose battery you completely drained during the flight? Luckily they tolerated the "address during stay" being left blank. I got out into the heat of Hyderabad in the late afternoon, got a cab, and had my host instruct the cab driver - over the phone, roaming via Denmark - where to drop me off. After a long ride, with cows and beautifully dressed pedestrians casually crossing the high speed road, and a short pitstop at an ATM, I finally met Pavithran. Until then we only knew each other from casual online chat. (I should later learn that my first impression was quite unusual - not the cows or clothes or chat, but roads capable of sustaining high speed!) Pavithran checked me into a small hotel and we visited his home. It was in the middle of being rebuilt, so it was impossible to stay at as we had originally planned. After a few hours of looking at the neighbourhood and talking about possible events during the week, we decided to cancel the hotel and instead go visit his parents in Khammam, some 5 hours away by bus.
This text is part of my Asia 2011 scribblings.

10 November 2011

Jonas Smedegaard: Asia 2011 - London

The first event of my trip was somewhat of a gimmick: give a 30-minute talk on FreedomBox to an audience of one person, in the transit area of Heathrow airport, London. A guy from India now living in London had helped prepare my visit to India, and was eager to participate - so much so that he was willing to drive out to the airport to meet me. The meeting failed, unfortunately: my flight got delayed, and my one-man "audience" got caught up in other duties. :-( Fun to try nevertheless. Looking back, that crazy attempt to squeeze in a meeting during a 1-hour stop-over was also an indicator of the general pace of the India trip.
This text is part of my Asia 2011 scribblings.

14 September 2011

Mike Hommey: Building a custom kernel for the Nexus S

There are several reasons why someone would want to build a custom kernel for their Android phone. In my case, this is because I wanted performance counters (those used by the perf tool that comes with the kernel source). In Julian Seward's case, he wanted swap support to overcome the limited memory amount on these devices in order to run valgrind. In both cases, the usual suspects (AOSP, CyanogenMod) don't provide the wanted features in prebuilt ROMs. There are also several reasons why someone would NOT want to build a complete ROM for their Android phone. In my case, the Nexus S is what I use to work on Firefox Mobile, but it is also my actual mobile phone. It's quite a painful and long process to create a custom ROM, and another long (but arguably less painful, thanks to ROM manager) process to backup the phone data, install the ROM, and restore the phone data. And if you happen to like or use the proprietary Google Apps that don't come with the AOSP sources, you need to add more steps. There are, however, tricks that allow building a custom kernel for the Nexus S and using it with the system already on the phone. Please note that the following procedure has only been tested on two Nexus S with a 2.6.35.7-something kernel (one with a stock ROM, but unlocked, and another one with an AOSP build). Also please note that there are various ways to achieve many of the steps in this procedure, but I'll only mention one (or two in a few cases). Finally, please note some steps rely on your device being rooted. There may be ways to do without, but I'm pretty sure it requires an unlocked device at the very least. This post covers neither rooting nor unlocking.

Preparing a build environment

To build an Android kernel, you need a cross-compiling toolchain. Theoretically, any will do, provided it targets ARM. I just used the one coming with the Android NDK:
$ wget http://dl.google.com/android/ndk/android-ndk-r6b-linux-x86.tar.bz2
$ tar -jxf android-ndk-r6b-linux-x86.tar.bz2
$ export ARCH=arm
$ export CROSS_COMPILE=$(pwd)/android-ndk-r6b/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86/bin/arm-linux-androideabi-
For the latter, you need to use a directory path containing prefixed versions (such as arm-eabi-gcc or arm-linux-androideabi-gcc), and include the prefix, but not "gcc". You will also need the adb tool coming from the Android SDK. You can install it this way:
$ wget http://dl.google.com/android/android-sdk_r12-linux_x86.tgz
$ tar -zxf android-sdk_r12-linux_x86.tgz
$ android-sdk-linux_x86/tools/android update sdk -u -t platform-tool
$ export PATH=$PATH:$(pwd)/android-sdk-linux_x86/platform-tools
Building the kernel

For the Nexus S, one needs to use the Samsung Android kernel tree, which happens to be unavailable at the moment of writing due to the kernel.org outage. Fortunately, there is a clone used for the B2G project, which also happens to contain the necessary cherry-picked patch to add support for the PMU registers on the Nexus S CPU that are needed for the performance counters.
$ git clone -b devrom-2.6.35 https://github.com/cgjones/samsung-android-kernel
$ cd samsung-android-kernel
You can then either start from the default kernel configuration:
$ make herring_defconfig
or use the one from the B2G project, which enables interesting features such as oprofile:
$ wget -O .config https://raw.github.com/cgjones/B2G/master/config/kernel-nexuss4g
From there, you can use make menuconfig or similar commands to further configure your kernel. One of the problems you'd first encounter when booting such a custom kernel image is that the bcm4329 driver module shipped in the system partition (and not in the boot image) won't match the kernel, and won't be loaded. The unfortunate consequence is the lack of WiFi support. One way to overcome this problem is to overwrite the kernel module in the system partition, but I didn't want to have to deal with switching modules when switching kernels. There is, however, a trick allowing the existing module to be loaded by the kernel: compile a kernel with the same version string as the one already on the phone. Please note this only really works if the kernel is really about the same. If there are differences in the binary interface between the kernel and the modules, it will fail in possibly dangerous ways. To use that trick, you first need to know what kernel version is running on your device. Settings > About phone > Kernel version will give you that information on the device itself. You can also retrieve that information with the following command:
$ adb shell cat /proc/version
With my stock ROM, this looks like the following:
Linux version 2.6.35.7-ge382d80 (android-build@apa28.mtv.corp.google.com) (gcc version 4.4.3 (GCC) ) #1 PREEMPT Thu Mar 31 21:11:55 PDT 2011
In the About phone information, it looks like:
2.6.35.7-ge382d80
android-build@apa28
The important part above is -ge382d80, and that is what we will be using in our kernel build. Make sure the part preceding -ge382d80 does match the output of the following command:
$ make kernelversion
The trick is to write that -ge382d80 in a .scmversion file in the kernel source tree (obviously, you need to replace -ge382d80 with whatever your device has):
$ echo -ge382d80 > .scmversion
The kernel can now be built:
$ make -j$(($(grep -c processor /proc/cpuinfo) * 3 / 2))
The -j part is the general rule I use when choosing the number of parallel processes make can use at the same time. You can pick whatever suits you better. Before going further, we need to get back to the main directory:
$ cd ..
Getting the current boot image

The Android boot image living on the device doesn't contain only a kernel. It also contains a ramdisk with a few scripts and binaries that starts the system initialization. As we will be using the ramdisk coming with the existing kernel, we need to get that ramdisk from the device's flash memory:
$ adb shell cat /proc/mtd | awk -F'[:"]' '$3 == "boot" { print $1 }'
The above command will print the mtd device name corresponding to the boot partition. On the Nexus S, this should be mtd2.
$ adb shell
$ su
# dd if=/dev/mtd/mtd2 of=/sdcard/boot.img bs=4096
2048+0 records in
2048+0 records out
8388608 bytes transferred in x.xxx secs (xxxxxxxx bytes/sec)
# exit
$ exit
In the above command sequence, replace mtd2 with whatever the previous command did output for you. Now, you can retrieve the boot image:
$ adb pull /sdcard/boot.img
Creating the new boot image

We first want to extract the ramdisk from that boot image. There are various tools to do so, but for convenience, I took unbootimg, on github, and modified it slightly to seamlessly support the page size on the Nexus S. For convenience as well, we'll use mkbootimg, even if fastboot is able to create boot images. Building unbootimg, as well as the other tools, relies on the Android build system, but since I didn't want to go through setting it up, I figured out a minimalistic way to build the tools:
$ git clone https://github.com/glandium/unbootimg.git
$ git clone git://git.linaro.org/android/platform/system/core.git
The latter is a clone of git://android.git.kernel.org/platform/system/core.git, which is down at the moment.
$ gcc -o unbootimg/unbootimg unbootimg/unbootimg.c core/libmincrypt/sha.c -Icore/include -Icore/mkbootimg
$ gcc -o mkbootimg core/mkbootimg/mkbootimg.c core/libmincrypt/sha.c -Icore/include
$ gcc -o fastboot core/fastboot/{protocol,engine,bootimg,fastboot,usb_linux,util_linux}.c core/libzipfile/{centraldir,zipfile}.c -Icore/mkbootimg -Icore/include -lz
Once the tools are built, we can extract the various data from the boot image:
$ unbootimg/unbootimg boot.img
section sizes incorrect
kernel 1000 2b1b84
ramdisk 2b3000 22d55
second 2d6000 0
total 2d6000 800000
...but we can still continue
Don't worry about the error messages about incorrect section sizes if it tells you "...but we can still continue". The unbootimg program creates three files: the kernel image (boot.img-kernel), the ramdisk image (boot.img-ramdisk.cpio.gz), and boot.img-mk, which records the options needed to rebuild a boot image identical to the original. All that is left to do is to generate the new boot image:
$ eval ./mkbootimg $(sed s,boot.img-kernel,samsung-android-kernel/arch/arm/boot/zImage, boot.img-mk)
Booting the image

There are two ways you can use the resulting boot image: one-time boot or flash. If you want to go for the latter, it is best to actually do both, starting with the one-time boot, to be sure you won't be leaving your phone useless (though recovery is there to the rescue; it is not covered here). First, you need to get your device into fastboot mode, a.k.a. the boot-loader:
$ adb reboot bootloader
Alternatively, you can power it off, and power it back on while pressing the volume up button. Once you see the boot-loader screen, you can test the boot image with a one-time boot:
$ ./fastboot boot boot.img
downloading 'boot.img'...
OKAY [ 0.xxxs]
booting...
OKAY [ 0.xxxs]
finished. total time: 0.xxxs
As a side note, if fastboot sits "waiting for device", it either means your device is not in fastboot mode (or is not connected), or that you have permission issues on the corresponding USB device in /dev. Your device should now be starting up, and eventually be usable under your brand new kernel (and WiFi should be working, too). Congratulations. If you want to use that kernel permanently, you can now flash it after going back into the bootloader:
$ adb reboot bootloader
$ ./fastboot flash boot boot.img
sending 'boot' (2904 KB)...
OKAY [ 0.xxxs]
writing 'boot'...
OKAY [ 0.xxxs]
finished. total time: 0.xxxs
$ ./fastboot reboot
Voilà.

26 July 2011

Matthew Garrett: Further adventures in EFI booting

Many people still install Linux from CDs. But a growing number install from USB. In an ideal world you'd be able to download one image that would let you do either, but it turns out that that's quite difficult. Shockingly enough, it's another situation where the system firmware exists to make your life difficult.

Booting a hard drive is pretty easy. The BIOS reads the first 512 bytes off the drive, copies them to RAM and executes them. That code is then responsible for either starting your bootloader or identifying the currently active partition and jumping to its boot sector, but before too long you're in a happy place where you're executing whatever you want to. Life is good. So you'd think that CDs would work in a similar way. The ISO 9660 format even leaves a whole 32KB at the start of a filesystem, which is enough space for a pretty awesome bootloader. But no. This is not how CDs work. That would be far too easy.

Let's imagine we're back in the 90s. People want to be able to boot off CD without needing a boot floppy to do so. And you're a PC vendor with a BIOS that's been lovingly[1] forced into a tiny piece of flash and which has to execute out of an almost as tiny piece of RAM if you want your users to be able to play any games. Letting boot code read arbitrary content off the CD would mean adding a new set of interrupt hooks, and that's going to be even more complicated because CDs have a sector size of 2K while hard drives are 512 bytes[2] and who's going to pay to implement this and for the extra flash and RAM and look surely there has to be another way?

So, of course, another way was found. The El Torito specification defines a way for shoving a reference to some linear blocks into the ISO 9660 header. The BIOS reads those blocks into memory and then redirects either the floppy or hard drive access interrupts (depending on the El Torito type) to that region. The boot code can then proceed as if it had been read off a floppy without all the trouble of actually putting a floppy in the machine, and the extra code required in the system BIOS is minimal.

USB sticks, however, are treated as hard drives. The BIOS won't look for El Torito images on them. Instead, it'll try to execute a boot sector. That isn't there on a CD image. Sigh.

A few years ago a piece of software called isohybrid popped up and solved this problem nicely. isohybrid is a companion to isolinux, which itself is a bootloader that fits into an El Torito image and can then load your kernel and installer from CD. isohybrid takes an ISO image, adds an x86 boot sector and partition table and does some more fiddling to turn a valid ISO image into one that can be copied directly onto a USB stick and booted. The world suddenly becomes a better place.

But that's BIOS. EFI makes this easier, right? Right?

No. EFI does not make this easier.

Despite EFI being a modern firmware for the modern world[3], EFI implementations are not required to be able to understand ISO 9660. In fact, I've never seen one that does. FAT is all the spec requires, and FAT is typically all you get. Nor will EFI just execute some arbitrary boot code from the start of the CD. So, how does EFI boot off CD?

El Torito. Obviously.

It's not quite as bad as it sounds, merely almost as bad as it sounds. While the typical way of using El Torito for a long time was to use floppy or hard drive emulation, it also supports a "No emulation" mode. It also supports setting a type flag for your media, which means you can distinguish between images intended for BIOS booting and EFI booting. But the fact remains that your CD has to include an embedded FAT partition that then contains a bootloader that's able to read ISO 9660 because your firmware is too inept to handle that itself[4].

How about USB sticks? Thankfully, booting these on EFI doesn't require any boot sectors at all. Instead you just have to have a partition table, a FAT partition and a bootloader in a well known location in that FAT partition. The required partition is, in fact, identical to the one you need in an El Torito image. And so this is where we start introducing some extra hacks.
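
To make that concrete, here's roughly what preparing such a stick by hand looks like (GRUB and /dev/sdX are placeholders; on 64-bit x86 the well known location is EFI/BOOT/BOOTX64.EFI):

# Create a GPT with a single FAT partition and put the bootloader
# in the fallback path the firmware searches on removable media
$ parted -s /dev/sdX mklabel gpt mkpart ESP fat32 1MiB 100MiB
$ mkfs.vfat /dev/sdX1
$ mount /dev/sdX1 /mnt
$ mkdir -p /mnt/EFI/BOOT
$ cp grubx64.efi /mnt/EFI/BOOT/BOOTX64.EFI
$ umount /mnt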

Like I said earlier, isohybrid fakes up an MBR and adds some boot code that points at the actual bootloader. It needs to do a little more on EFI. The first problem is that the isohybrid MBR partition has to cover the entire ISO 9660 filesystem on the USB stick so that the operating system can access it later, but the El Torito FAT image is inside that partition. A lot of MBR-based code becomes very unhappy if you try to set up a partition that's a subset of another partition. So we can't really use MBR. On to GPT.

GPT, or the GUID Partition Table, is the EFI era's replacement for MBR partitions. It has two main advantages over MBR - firstly it can cover partitions larger than 2TB without having to increase sector size, and secondly it doesn't have the primary/logical partition horror that still makes MBR more difficult than it has any right to be. The format is pretty simple - you have a header block 1 logical block into the media (so 512 bytes on a typical USB stick), and then a pointer to a list of partitions. There's then a secondary table one block from the end of the disk, which points at another list of partitions. Both blocks have multiple CRCs that guarantee that neither the header nor the partition list have been corrupted. It turns out to be a relatively straightforward modification of isohybrid to get it to look for a secondary EFI image and construct a GPT entry pointing at it. This works surprisingly well, and media prepared this way will boot EFI machines if burned to a CD or written to a USB stick.

There are a few quirks. Macs will show two boot icons for these CDs[6], one marked "EFI Boot" and one helpfully marked "Windows"[7], with the latter booting the BIOS El Torito image. That's a little irritating, but not insurmountable. The other issue is that older Macs won't look for boot loaders in the legacy locations. This is where things start getting horrible.

Back in the old days, Apple boot media used to have a special "blessed" folder. Attempting to boot would involve the firmware looking for such a folder and then using that to start itself up. Any folder in the filesystem could be blessed. Modern hardware doesn't use boot folders, but does use boot files. For an HFS+ filesystem, the inode of the bootloader is written to a specific offset in the filesystem superblock and the firmware simply finds that inode and executes it. And this appears to be all that older Macs support.

So, having written a small tool to bless an HFS+ partition, I tried the obvious first step of burning a CD with three El Torito images (one BIOS, one FAT, one HFS+). It failed. While rEFIt could see the bootloader in the HFS+ image, the firmware appeared to have no interest at all in booting off it. Yet Apple install media would boot. What was the difference?

The difference, obviously, was that these earlier Macs don't appear to support El Torito booting. The Apple install media contained an Apple partition map.

The Apple partition map (APM) is Apple's legacy partition table format. Apple mostly dropped it when they went to x86, where it's retained for two purposes. The first is for drives that need to be shared between Intel Macs and PPC ones. The second seems to be for their install DVDs. Some further playing revealed that burning a CD with an APM entry pointing at the HFS+ filesystem on the CD gave me a boot icon. Problem solved?

Not really. Remember how I earlier mentioned that ISO 9660 leaves 32KB at the start of the image, and that an isohybrid image then writes an MBR and boot sector in the first 512 bytes of that, and the GPT header starts 512 bytes into a drive? That means that it's easy to produce an ISO that has both a boot sector, MBR partition table and GPT. None of them overlap. APM, on the other hand, has a header that's located at byte 0 of the media, overlapping with the boot sector. And it has a partition listing that's located at sector 1, overlapping with the GPT. Is all lost?

No. Merely sanity.

The first thing to remember is that the boot sector is just raw assembler. It's a byte stream that's executed by the CPU. And there's a lot of things you can tell a CPU to do that result in nothing happening. Peter Jones pointed out that the only bits of the APM header you actually need are the letters "ER", followed by the sector size as a two byte big endian integer. These disassemble to harmless instructions, so we can simply move the boot code down a little and stick these at the beginning. A PC that executes it will read straight through the bizarre (but harmless) Apple bytes and then execute the real boot code.

The second thing that's important here is that we were just given the opportunity to specify the sector size. The GPT is only relevant when the image is written to a USB stick, so assumes a sector size of 512 bytes. So when the GPT starts one sector into the drive, it's actually starting 512 bytes into the drive. APM also starts one sector into the drive, but we can simply put a different sector size into the header and suddenly we're able to choose where that's going to be. 2K seems like a good choice, and so the firmware will now look for the header at byte 2048.

That's still in the middle of the GPT partition listing, though. Except we can avoid that as well. GPT lets you specify where the partition listing starts and doesn't require it to be immediately after the header. So we can offset the partition listing to, say, byte 8192 and leave a hole for the Apple partition map.

And, shockingly, this works. Setting up a CD this way gives a boot icon on old Macs. On new Macs, it gives three - one for legacy boot, one for EFI boot via FAT and one for EFI boot via HFS. Less than ideal, but eh. The one remaining problem is that this doesn't work for USB sticks (the firmware sees the GPT and ignores the APM), so we also need to add a GPT entry for the HFS+ partition. Job done.

So, it is possible to produce install media that will work if burned to CD or written to a USB stick. It's even possible to produce a version that will work on Macs, as long as you're willing to put up with three partition tables and an x86 boot sector that doubles as an APM header. And patches to isohybrid to do all of this will be turning up as soon as I tidy the code to the point where it works without having to hack in offsets by hand.

[1] Insert some other adverb here if you feel like it
[2] Why yes, 15 years later BIOSes still tend to assume 512 bytes. Which is why your 4K sector disk is much harder to work with than you'd like it to be.
[3] Ever noticed how the modern world involves a great deal of suffering, misery and death? EFI fits into that world perfectly.
[4] Obviously if you want your media to be bootable via both BIOS and EFI you need to produce a CD with two El Torito images. BIOS systems should ignore the image that says it's for EFI, and EFI systems should ignore the BIOS one. Some especially creative BIOS authors[5] have decided that users shouldn't have their choices limited in such a way, and so pop up a screen that says:
Select CD-ROM boot type:
1.
2.

and wait for the user to press a key. The lack of labels after the numbers is not a typographical error on my part.
[5] Older (pre-2009, and some 2009 models) Apple hardware has this bug if a dual-El Torito CD is booted via the BIOS compatibility layer. This is especially unfortunate because said machines often fail to provide a working keyboard emulation at this stage, resulting in you being stuck forever at an impressively unhelpful screen. This isn't a Linux bug, since it's happening before we've run any of our code at all. It's not even limited to Linux. 64-bit install media for Vista SP1, Windows 7 and Server 2008 all have similar El Torito layout and all trigger the same bug on Apple hardware. Apple's aware of this, and has resolved the issue by declaring that these machines don't support 64 bit Windows.
[6] Even further investigation reveals that Apple will show you as many icons as there are El Torito images, which is a rare example of Apple giving the user the freedom to brutally butcher their extremities if they so desire
[7] "Windows" is Apple code for "Booting via BIOS compatibility". The Apple boot menu will call any filesystem with a BIOS boot sector Windows.

29 June 2011

Jonas Smedegaard: The year of the FreedomBox

I am involved in developing something coined as the FreedomBox. Explaining it to my mum the other day, she wisely asked whether it, albeit clearly an exciting challenge we've picked, really is doable. Surely the World has gone sour, but is such radical change even possible? Annoying question! And clever :-) For some years, tech media has tried to predict when Linux would reach momentum for ordinary users. That current or next year was to become the Year of the Linux Desktop. Funny thing, seen in retrospect, is how "the year" kept being postponed, and when finally OLPC paved the way for the boom of Netbooks and arguably we got there, the World had moved on: Now Free Software is as common and as usable on desktops as commercially driven systems. It is taken for granted, not praised, and we look forward to the Next Big Challenge (as geeks) or Next Big Excitement (as users). Perhaps a similar fate is to be expected for FreedomBox: Initially when sparking our interest, and repeatedly since - although we still today have nothing concrete to show, indeed even before we started hacking on it or knew the name of our dreams - our Prophet declared the Year of the FreedomBox. Not explicitly, but with the cleverly phrased "right now." I am excited and proud to be working on FreedomBox, and foolishly hope it will be ready for worldwide consumption in a very near "right now" - well aware that most likely it won't happen like that. Thing is, I don't really care how it happens, if only something does. This is due to the way we work: Hacking may appear from the outside as larger projects, but really it is juggling piles of small pieces for an eternal jigsaw puzzle, with each piece usable in multiple ways and across projects. I do not work only on FreedomBox, just as I did not work only on Sugar before that, or only with Debian as my platform: I work on Freedom-enabling technologies and ways to frame them for the Real World to use them with a vengeance. I sure hope you take the results for granted. That's true success!

7 January 2011

Paul Wise: Another year, another log entry

It has been almost a full year since my last log entry. It has been a busy work year; I attended some nice conferences and did minimal FLOSS stuff. On the work side of things I was a third of an Australian VoIP startup that came and went. I set up Debian servers, installed OpenSIPS and associated software, wrote OpenSIPS scripts, wrote peripheral software and did customer support. We had a good thing going there for a while, with some fans on the Whirlpool forums, but in the end there wasn't enough money for the requisite marketing and local market circumstances were squeezing Australian VoIP providers anyway. On the conference side of things I went to LCA 2010, the Thai Mini-DebCamp 2010, DebConf10 and FOSSASIA 2010. Had a great time at all of them. At LCA 2010 in windy Wellington, New Zealand, the distributions summit organised by Martin Krafft was one of the highlights. It was dominated by Debian/Ubuntu talks but there were some other interesting ones, especially the one on GoboLinux's integration of domain-specific package managers. Also excellent were the keynotes given by Gabriella Coleman (Best & worst of times), Mako Hill (Antifeatures) and others, which I felt gave LCA an improved and very welcome focus on software freedom. There were quite a few Debian folks at LCA; it was great to hang out with them during the week and afterwards. Monopedal sumo with mako and others was hilarious fun. At the Thai Mini-DebCamp 2010 in Khon Kaen, I was glad to see Andrew Lee (Taiwan) and Christian Perrier (France) again and to meet Yukiharu YABUKI (Japan) and Daiki Ueno (Japan). In addition to the five international folks, there were quite a few locals, including Thailand's currently sole Debian member, Theppitak Karoonboonyanan. The event was hosted at Khon Kaen University and opened with my talk about the Debian Social Contract and the Debian Free Software Guidelines. This was followed by a number of talks about Debian package building, a 3-day BSP where we touched 57 bugs, a great day of sightseeing and talks about i18n, derivative distros, keysigning, mirrors, contribution and a discussion about DebConf. During the week there was also the usual beersigning, combined with eating of unfamiliar and "interesting" Thai snacks. After the conference Andrew and I roamed some markets in Bangkok and got Thai massages. Beforehand I also visited a friend from my travels on the RV Heraclitus in Chiang Mai, once again experiencing the awesomeness of trains in Asia, unfortunately during the dry season this time. I took a lot of photos during my time in Thailand and ate a lot of great and spicy food. As a vegetarian I especially appreciated the organisers' efforts to accommodate this during the conference. At DebConf10 in New York City, by far the highlight was Eben Moglen's vision of the FreedomBox. Negotiating the hot rickety subways was fun, the party at the NYC Resistor space was most excellent, Coney Island was hot and the water a bit yuck, zack threw a ball, and the food and campus were really nice. Really enjoyed the lintian BoF, ARM discussions, shy folks, GPLv3 question time, paulproteus' comments & insights, wiki BoF, puppet BoF, derivatives BoF, Sita, astronomy rooftop, cheese, virt BoF, Libravatar, DebConf11, Brave new Multimedia World, bagels for breakfast, CUT, OpenStreetMap & lightning talks. Having my power supply die was not fun at all. Afterwards I hung out with a couple of the exhausted organisers, ate awesome vegan food and fell asleep watching a movie about dreams.
One weird thing about DebConf10 was that relatively few folks used the DebConf gallery to host their photos; months later only myself and Aigars Mahinovs had posted any photos there. At FOSSASIA 2010 in Ho Chi Minh City (HCMC) there was a mini-DebConf. I arrived at the HCMC airport and was greeted by Huyen (thanks!!), one of FOSSASIA's numerous volunteers, who bundled me into a taxi bound for the speakers' accommodation and pre-event meetup at The Spotted Cow Bar. The next day the conference opened at the Raffles International College and after looking at the schedule I noticed that I was to give a talk about Debian that day. Since I hadn't volunteered for such a talk and had nothing prepared, the schedule took me by surprise. So shortly after an awesome lunch of Vietnamese pancakes we gathered some Debian folks and a random Fedora dude and prepared a short intro to Debian. The rest of the day the highlights were the intro, video greetings and the fonts, YaCy and HTML5 talks. The next day the Debian MiniConf began with Arne Goetje and everyone trying to get Debian Live LXDE USB keys booted on as many machines in the classroom as possible (many didn't boot). Once people started showing up we kicked off with Thomas Goirand's introduction to the breadth of Debian. Others talked about Debian Pure Blends, Gnuk and building community and packages. The second last session was about showing the Vietnamese folks in the room how to do l10n and translation, since Debian had only one Vietnamese translator (Clytie Siddall). After manually switching keyboard layouts (it seems LXDE doesn't have a GUI for that) on the English LXDE installs, the two Cambodian folks were able to do some Khmer translation too. This was a great session and it resulted in two extra Vietnamese translators joining Debian. It went over time so I didn't end up doing my presentation about package reviewing. We rushed off to a university where the random Fedora ch^Wambassador was hosting a Fedora 14 release party in a huge packed classroom. There were a lot of excited faces, interesting and advanced questions and it was in general a success. Afterwards we had some food, joined up with some other speakers and ended up in a bar in the gross tourist zone. On the final day we hung around in the Debian room, went downstairs for the group photo and final goodbyes. Later we found a place with baked goods, coffee and juices and navigated the crazy traffic to a nice local restaurant. The next morning Arne & I went to the airport, others went on a Mekong Delta tour and Jonas hung out with the organisers. I took fewer photos than at other events but got a few interesting ones. I avoided doing a lot of FLOSS stuff over the last year; I hope to work on some things in the coming months. I'm also planning some interesting travel and acquiring some new technological goods, more on those in some later posts.

23 November 2010

Biella Coleman: An unlikely story about a pit bull attack, free software, and a New York Times reporter

If I told you that in the last two days I have been caught in a vortex of coincidence, a vortex composed of pit bulls, free software, Diaspora (the software), mold, and a New York Times reporter, I bet you would think: not likely. So the story started on JetBlue, which offers snacks, lots of them, and DirecTV. Since I don't have TV I kinda go on a binge, watching all sorts of shows as I make my way home. I watched a pretty disturbing but interesting documentary on Jim Jones on CNN and a show on Animal Planet on pit bulls and parolees. When I rolled into my current digs in northern Manhattan (I am currently banished from my downtown apt due to mold, but that is a whole other story), there was a dinner party well underway. At some point in the evening, prompted by me, we talked pit bulls, as my friends want to get one but their family has issued a threat of disavowal if they do. The next morning, I was scoping out the website for the Animal Planet show as I was intrigued by it and frankly I kinda like pit bulls (maybe less now, although I think they are unfairly maligned). Five minutes into perusing the site, I hear screeches from hell. It sounds like a woman is being attacked. And she is. A woman right outside of my window was being attacked by a pit bull. I am staying with a friend, the open source developer Karl Fogel, and good soul that he is, he ran out to help the lady (five weeks of sickness due to mold, or so we think it was, had been enough for me; I could not stomach the idea of getting bitten, so I played the role of concerned spectator). It took minutes upon minutes, really just too many minutes, to get the pit bull off; even a brick pounded against his head failed (apparently a cigarette or match held to the throat does the trick, which I found out later). Eventually the dog was extracted, a huge team of cops showed up, the dog was whisked away, the victim was taken to the hospital, and life returned to calm and quiet. The next day, I was being interviewed by the New York Times reporter Jim Dwyer, who wrote a story about Diaspora for the New York Times back in the summer, helping to propel it from relative obscurity to near insta-fame (one of the Diaspora developers, Max, was my student). We were running out of time (I had another appointment) so I asked him if he lived in northern Manhattan, as that is what his bio page indicates. He confirmed, I explained I was up there and that we could meet up there later to finish up. He inquired what part, I told him roughly where I was, he remarked he was near there, and so naturally I told him about the crazy pit bull attack I witnessed from my window, as I can't shut my trap when it comes to things like that. Well, yes, you know what is coming next: he was there, helping Karl (and others) deal with the pit bull attack. He lives nearby and heard the shrieks of agony and came out to aid. All in all it was pretty horrific. He also met Karl, insofar as Karl gave him his phone number and email just in case he was needed as a witness (Karl had to dash off to catch a plane). Well, the funny thing, or as you also might guess: Jim, who is doing some more writing on tech, free software etc., should really talk to Karl given his key role in the community, so they already met, although under odd and terrible circumstances.
I am not sure whether I am more wigged out by the fact that I was reading about pit bulls when the attack happened, or by the fact that the reporter who interviewed me was there alongside a free software developer he really needs to interview. Whatever the case, I kinda hope the vortex of coincidence now leaves me and hits someone else (sans any horrible attack). Or else, as Karl noted in the blog comments, I will have to be very careful about what shows I watch:
Amen to that! Enough with the coincidence vortex. As I said to Biella in IRC later: Do us a favor, don't watch any shows about nuclear attacks on New York, okay?

5 October 2010

Russell Coker: Open Source Learning

Richard Baraniuk gave an interesting TED talk about Open Source Learning [1]. His project, named Connexions, which is dedicated to creating Creative Commons free textbooks, is a leader in this space [2]. He spoke about Catherine Schmidt-Jones, who wrote 197 modules and 12 courses on music [3]; that's a very significant amount of work! He also mentioned the translation of the work into other languages. I wonder how well the changes get merged back across the language divide. We have ongoing disputes in the free software community about whether various organisations do enough work to send patches back upstream; this seems likely to be more of a problem in situations where most of the upstream authors can't even understand the language in which the changes are written, and when the changes involve something a lot more subtle than a change to an algorithm. This would be particularly difficult for Chinese and Japanese, as those languages seem to lack quality automatic translation. He mentioned Teachers Without Borders [4] in passing. Obviously an organisation that wants to bring education to some of the poorer parts of the world can't have a curriculum that involves $250 of text books per year for a high school student (which was about what my parents paid when I was in year 12) or $500 of text books per year for a university student (which might be a low estimate for some courses, as a single text can cost more than $120). Free content and on-demand printing (or viewing PDF files on an OLPC system) can dramatically lower the cost of education. It's widely believed that free content without the ability to remix is cultural imperialism. This is apparently one of the reasons that the Connexions project is based on the Creative Commons Attribution license [5], so anyone anywhere can translate it, make a derivative work, or collate parts of it with other work. I expect that another factor is the great lack of success of all the various schemes that involve people contributing content for a share of the profits; the profits just don't match the amount of work involved. Philanthropy and reputation seem to be the only suitable motivating factors for contributing to such projects. One of the stated benefits of the project is to have computer based content with live examples of equations. Sometimes it is possible to just look at an equation and know what it means, but often more explanation is required. The ability to click on an equation, plug in different values and have them automatically calculated, and possibly graphed if appropriate, can make things a lot easier. Even if the result is merely what would be provided by reading a text book and spending a few minutes with a scientific calculator, the result should be a lot better in terms of learning, as the time required to operate a calculator can break the student's concentration. Even better, it's possible to have dynamic explanations tailored to the user's demand. To try this out I searched on Ohm's Law (something that seems to be unknown by many people on the Internet who claim to understand electricity). I was directed to an off-site page which used Flash to display a tutorial on Ohm's Law; the tutorial was quite good, but it does seem to depart from the free content mission of the project to direct people off-site to proprietary content which uses a proprietary delivery system. I think that the Connexions project could do without links to sites such as college-cram.com.
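To make the "live equation" idea concrete, the interactivity being described amounts to something like the following toy calculation (my own illustration, not anything from the Connexions site): supply any two values and the third is recomputed from Ohm's Law, V = I * R.

    def ohms_law(voltage=None, current=None, resistance=None):
        # Given any two of V (volts), I (amps) and R (ohms),
        # compute the third from V = I * R.
        if voltage is None:
            return current * resistance
        if current is None:
            return voltage / resistance
        return voltage / current

    print(ohms_law(current=0.25, resistance=48))  # V = 12 volts
    print(ohms_law(voltage=12, resistance=48))    # I = 0.25 amps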
One of the most important features of the project is peer review "lenses". The High Performance Computing Lens [6] has some good content and will be of interest to many people in the free software community, but again it requires Flash. One final nit is the search engine, which is slow and not very useful. A search for "engine" returned lots of hits about engineering, which isn't useful if you want to learn about how engines work. But generally this is a great project; it seems to be doing a lot of good and it's got enough content to encourage other people and organisations to get involved. It would be good to get some text books about free software on there!

11 August 2010

Romain Beauxis: Mingw32-ocaml 3.12.0

An updated version of the OCaml cross-compiler package, based on OCaml 3.12.0, has just been uploaded to Debian experimental! Any reports and tests of the package would be very welcome! I have personally tested it with Liquidsoap and built a win32 version of the software. Since this build pulls in many external modules as well as C objects, I am pretty confident in the cross-compiler uploaded to experimental. About the cross-compiler: the OCaml cross-compiler is the result of the hard work done by Richard Jones for Fedora. The Debian package is merely a backport (and adaptation to OCaml 3.12.0) of his patches. If you care about the future of the cross-compiler, the best you can do is work with upstream to find out how to push the needed changes there in order to have plain support for it. I personally have no time to start this process, but I could try to describe the patches to an interested contributor. Warning: some are REALLY hacky :-)

25 June 2010

Anand Kumria: The Rebound

Another coming-of-age movie, all about age differences and timing. Somewhat ironic given the relationship Catherine Zeta-Jones is in. It had some realism, judging from what I have observed of relationships with big age differences, but it is also very similar to Prime (with Uma Thurman), except that this one also involves children (and thus you get the associated children gags). It seems to really play off Aram Finkelstein (played by Justin Bartha), one of the protagonists, being Jewish. Just like Prime. The Jewishness only added two (or, maybe, three) gags; I felt the film could have done without it. It might have forced the writers to come up with something a bit more original. For me, the reason to watch this was Catherine Zeta-Jones who, as one of the characters notes, is a MILF. The chemistry between the two leads is really what saves this film.

28 February 2010

Russ Allbery: Review: Coders at Work

Review: Coders at Work, by Peter Seibel
Publisher: Apress
Copyright: 2009
ISBN: 1-4302-1948-3
Format: Trade paperback
Pages: 601
Coders at Work is a collection of edited interviews by Peter Seibel (probably best known previously for his book Practical Common Lisp) of an eclectic and excellent collection of fifteen programmers. It opens with an interview with Jamie Zawinski (one of the original Netscape developers) and closes with Donald Knuth. In between, the interview subjects range in programmer generations from Fran Allen (who started at IBM in 1957) and Bernie Cosell (one of the original ARPANET developers) to Brad Fitzpatrick (LiveJournal founder and original developer). Techniques and preferences also range widely, including two people involved in JavaScript development and standardization (Brendan Eich and Douglas Crockford), a functional programming language designer and developer (Simon Peyton Jones), language designers and standardizers such as Guy Steele, and people like Dan Ingalls who have a different, experimental approach to programming than the normal application development focus. All of the interviewees are asked roughly the same basic questions, but each discussion goes in different directions. Seibel does an excellent job letting the interview subjects shape the discussion. Two things immediately stood out for me about this book. First, it's huge, and that's not padding. There are just over 600 pages of content here, much of it fascinating. The discussions Seibel has are broad-ranging, covering topics from the best way to learn programming to history and anecdotes of the field. There's some discussion of technique, but primarily at the level of basic approaches and mindset. One typical question is how each programmer organizes their approach to reading code that they aren't familiar with. Each interviewee is also asked for book recommendations, for their debugging techniques, for their opinions on proving code correct, and how they design code. The participants are so different in their backgrounds and approaches that these conversations go in fifteen different directions. This is one of the most compelling and engrossing non-fiction books I've read. Second, the selection of interview subjects, while full of well-known names in the field, is not the usual suspects. While I'm interested in the opinions of people like Larry Wall and Guido van Rossum, I've already heard quite a lot about how they think about programming. That's material that Coders at Work doesn't need to cover, and it doesn't. Many of the interview subjects here are people I'd heard of only vaguely or not at all prior to this book, often because they work in an area of programming that I'm not yet personally familiar with. Those who I had heard of, such as L. Peter Deutsch, I often knew in only one context (Ghostscript in that case) and was unfamiliar with the rest of their work. This gives the book a great exploratory feel and a lot of originality. There is so much good material here that it's hard to give a capsule review. This is a book I'm highly likely to re-read, taking more detailed notes. There's entertaining snarking from Jamie Zawinski and Brendan Eich, fascinating history of the field (including in gender balance) from Fran Allen, and an intriguing interview with Joe Armstrong (creator of Erlang), who seems to have a far different attitude towards languages and libraries than the other interviewees. Every interview is full of gems, bits of insight that I now want to research or play with. A couple of examples come to mind, just to provide a feel of the sort of insights I took out of the book.
In the interview with Joshua Bloch, who does a lot of work on library APIs, he mentions that empathy is one of the most important skills for designing an API. You have to be able to put yourself in the shoes of the programmer who's going to use the API and understand how it will feel to them. This came up in the context of a discussion about different types of programmers, and how programmers can be good at different things; the one who can do low-level deep optimization may not have that sense of empathy. Another example: Bernie Cosell talked about how he did debugging, and how he got a reputation for being a fantastic debugger who was able to fix just about anything. He confessed that he often reached a portion of the code that he didn't understand, that seemed too complex and tricky for what it was attempting to accomplish, and rather than trace through it and try to understand it, he just rewrote it. And after rewriting, the bug was often gone. It wasn't really debugging, but at the same time it's close to the more recent concept of refactoring. Several of the interview subjects commented on a subjective feeling of complexity and how when it gets too high that's a warning sign that code may need to be rethought and rewritten. A third example: I was fascinated by the number of interviewees who said that they used printf, assert, and eyeballs to debug rather than using any more advanced debugging tools. The former Lisp developers would often bemoan the primitiveness of tools like gdb, but many of them found that print statements and thinking hard about the code were usually all that's needed. (There was also a lot of discussion about test suites and test-driven development.) The general consensus was that concurrency problems were the hardest to debug; they made up a disproportionate number of the responses to Seibel's question about the hardest bug the programmer ever had to track down. I could go on giving similar examples at great length, but apart from the specific bits of wisdom, the strongest impact this book made on me was emotional. Coders at Work is full of people who love programming and love software, and that enthusiasm, both in general and for specific tools and ideas, comes through very clearly. I found it inspiring. I realized while reading this book, and I suspect I'm not alone among programmers in this, that I largely stopped learning how to program a few years back and have been reusing skills that I already have. Reading Coders at Work gave me a strong push towards finding ways to start learning, experimenting, and trying new techniques again. It also filled me with enthusiasm for the process of programming, which immediately helped my productivity on my own coding projects. This is obviously a book whose primary target audience is practicing programmers, and while it doesn't go too far into language theory, I was relying on remembered terms and structure from my master's degree for a few of the interviews. I think it's approachable for anyone who has a working background in programming and a few languages or a CS degree, but it might be a stretch for someone new to the field. But even someone without any programming knowledge at all would get a lot out of the anecdotes and snapshots of the history of software development. Coders at Work is also full of jumping-off points for some additional research on Google or additional reading in other recommended books.
I only have one complaint, which I have to mention in passing: for such a large book full of interesting ideas and book recommendations, the index is wholly inadequate. I tried looking up five or six things in it, including the source of some of the book recommendations that are collected in an appendix, and I struck out every time. It's very hard to find something again in 600 pages, and more attention paid to the index would have been greatly appreciated. But, despite that, for people within the target audience, I cannot recommend this book too highly. Not only was it full of useful information, at the level of programming above the code details that's often the hardest to talk about, but it's consistently entertaining and emotionally invigorating and inspiring. It made the rounds of tech blogs when it was first released, to nearly universal approval, and I can only echo that. If you're a practicing programmer, I don't think you'll regret spending a few weeks reading and thinking about this book. Rating: 10 out of 10
